Using Automatic Machine Translation Metrics to Analyze the Impact of Source Reformulations

Authors

  • Johann Roturier
  • Linda Mitchell
  • Robert Grabowski
  • Melanie Siegel
Abstract

This paper investigates the usefulness of automatic machine translation metrics when analyzing the impact of source reformulations on the quality of machine-translated user-generated content. We propose a novel framework to quickly identify rewriting rules which improve or degrade the quality of MT output, by relying on automatic metrics rather than human judgments. We find that this approach allows us to quickly identify rules that overlap between two language pairs (English-French and English-German), as well as specific cases where the rules' precision could be improved.
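
A minimal sketch of the before/after comparison such a framework implies is given below, assuming sentence-level BLEU from the sacrebleu library; the rewriting-rule names, example sentences, and data layout are hypothetical illustrations, not taken from the paper.

```python
# Sketch: score the MT output of the original and the reformulated source
# against the same reference, then aggregate the score deltas per rewriting
# rule to flag rules that improve or degrade MT quality.
from collections import defaultdict
from statistics import mean

import sacrebleu  # pip install sacrebleu

# Each record: (rule_id, MT of original source, MT of rewritten source, reference).
# These records are invented examples for illustration.
records = [
    ("avoid_passive", "The file were deleted by system.",
     "The system deleted the file.", "The system deleted the file."),
    ("expand_contractions", "You can not open it.",
     "You cannot open it.", "You cannot open the file."),
]

deltas = defaultdict(list)
for rule, mt_before, mt_after, ref in records:
    before = sacrebleu.sentence_bleu(mt_before, [ref]).score
    after = sacrebleu.sentence_bleu(mt_after, [ref]).score
    deltas[rule].append(after - before)

for rule, ds in deltas.items():
    verdict = "improves" if mean(ds) > 0 else "degrades"
    print(f"{rule}: mean BLEU delta {mean(ds):+.2f} -> {verdict} MT output")
```

Aggregating per-sentence score deltas by rule is what lets such a framework rank rewriting rules without collecting human judgments.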

Similar resources

The Correlation of Machine Translation Evaluation Metrics with Human Judgement on Persian Language

Machine Translation Evaluation Metrics (MTEMs) are the central core of Machine Translation (MT) engines, as they are developed based on frequent evaluation. Although MTEMs are widespread today, their validity and quality for many languages are still in question. The aim of this research study was to examine the validity and assess the quality of MTEMs from the Lexical Similarity set on machine tra...

Integrating Linguistic Information in Machine Translation Evaluation

The automatic evaluation of machine translation (MT) has been a very important factor driving the success of statistical machine translation for most of this decade. Prior to automatic metrics, researchers were forced to rely more heavily on human evaluations, which are costly and time-consuming. Automatic metrics allow systems to analyze and reduce errors while they train. Fully automatic mach...

Better Evaluation Metrics Lead to Better Machine Translation

Many machine translation evaluation metrics have been proposed after the seminal BLEU metric, and many among them have been found to consistently outperform BLEU, demonstrated by their better correlations with human judgment. It has long been the hope that by tuning machine translation systems against these new generation metrics, advances in automatic machine translation evaluation can lead di...

ORANGE: a Method for Evaluating Automatic Evaluation Metrics for Machine Translation

Comparisons of automatic evaluation metrics for machine translation are usually conducted on corpus level using correlation statistics such as Pearson’s product moment correlation coefficient or Spearman’s rank order correlation coefficient between human scores and automatic scores. However, such comparisons rely on human judgments of translation qualities such as adequacy and fluency. Unfortun...
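
The correlation statistics named above can be computed directly with scipy.stats; the human and metric score lists below are illustrative placeholders, not data from the paper.

```python
# Sketch of the corpus-level correlation check described above: correlate
# automatic metric scores with human judgments of translation quality.
from scipy.stats import pearsonr, spearmanr

human_scores = [4.0, 3.5, 2.0, 4.5, 3.0]        # e.g. averaged adequacy judgments
metric_scores = [0.62, 0.55, 0.31, 0.70, 0.48]  # e.g. BLEU per segment

r, _ = pearsonr(human_scores, metric_scores)
rho, _ = spearmanr(human_scores, metric_scores)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```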

Learning the Impact of Machine Translation Evaluation Metrics for Semantic Textual Similarity

We present work that evaluates the hypothesis that automatic evaluation metrics developed for Machine Translation (MT) systems have a significant impact on predicting semantic similarity scores in the Semantic Textual Similarity (STS) task for English, in light of their usage for paraphrase identification. We show that different metrics may have different behaviors and significance along the semantic ...
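
A hypothetical sketch of this setup is shown below: several MT-metric scores for a sentence pair serve as features for predicting a gold STS similarity label. The feature values and the choice of a linear model are assumptions for illustration, not the paper's actual configuration.

```python
# Sketch: treat MT-metric scores for a sentence pair as regression features
# and predict a gold STS similarity score (0-5 scale). All values are
# invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows: sentence pairs; columns: e.g. BLEU-, METEOR-, and TER-based scores.
X = np.array([[0.62, 0.70, 0.55],
              [0.31, 0.40, 0.28],
              [0.85, 0.90, 0.80],
              [0.10, 0.15, 0.12]])
y = np.array([3.8, 2.1, 4.6, 0.9])  # gold STS similarity labels

model = LinearRegression().fit(X, y)
print("predicted similarity:", model.predict([[0.50, 0.60, 0.45]])[0])
```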

Publication year: 2012